To address the problems of limited information expression, class imbalance, and dynamic spatio-temporal characteristics in accident data, an accident prediction model fusing heterogeneous traffic situations was proposed. In this model, a spatio-temporal state aggregation module performed semantic enhancement with the traffic event and weather features that represent the dynamic traffic situation, and aggregated the historical multi-period spatio-temporal states of four types of regions (single region, adjacent regions, similar regions, and the global region); a spatio-temporal relation capture module captured the dynamic local and global spatio-temporal characteristics of accident data from both micro and macro perspectives; and a spatio-temporal data fusion module further fused the multi-region, multi-angle spatio-temporal states to predict accidents in the next period. Experimental results on five city datasets from US-Accident demonstrate that the average F1-scores of the proposed model on accident samples, non-accident samples, and the weighted average are 85.6%, 86.4%, and 86.6% respectively, improvements of 14.4%, 5.6%, and 9.3% over the traditional Feedforward Neural Network (FNN), indicating that the proposed model can effectively suppress the influence of accident data imbalance on the experimental results. Constructing an efficient accident prediction model helps to analyze the safety situation of road traffic, reduce the occurrence of traffic accidents, and improve traffic safety.
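As a rough illustration of the four-region idea, the sketch below mean-pools one region's historical states over the single, adjacent, similar, and global views. The tensor layout, the `adjacency` and `similarity` inputs, and mean-pooling itself are simplifying assumptions, not the paper's learned aggregation module.

```python
import numpy as np

def aggregate_states(states, region_id, adjacency, similarity, k=2):
    """Collect historical spatio-temporal states for one region from four
    views: the region itself, its adjacent regions, its k most similar
    regions, and the global average. Mean-pooling stands in here for the
    model's learned aggregation.

    states:     (n_regions, n_periods, n_features) history tensor
    adjacency:  dict mapping region id -> array of neighbor ids
    similarity: (n_regions, n_regions) similarity matrix
    """
    own = states[region_id]                              # single region
    adj = states[adjacency[region_id]].mean(axis=0)      # adjacent regions
    sim_order = np.argsort(similarity[region_id])[::-1]
    top_k = sim_order[sim_order != region_id][:k]        # drop self
    sim = states[top_k].mean(axis=0)                     # similar regions
    glob = states.mean(axis=0)                           # global region
    return np.stack([own, adj, sim, glob])               # (4, n_periods, n_features)
```

A downstream predictor would then fuse this `(4, n_periods, n_features)` tensor with the relation-capture features before classifying the next period.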
To address the difficulty of obtaining a composite service with high overall performance in a large-scale Web service environment, a large-scale Web service composition method was proposed. Firstly, the Document Object Model (DOM) was used to parse the user demand document in XML format and generate an abstract Web service composition sequence. Secondly, a service topic model was used for service filtering, selecting the Top-k concrete Web services for each abstract Web service to reduce the composition space. Thirdly, to improve the quality and efficiency of service composition, an Optimized Grey Wolf Optimizer based on a Logistic chaotic map and a Nonlinear convergence factor (OGWO/LN) was proposed to select the optimal composition plan. In this algorithm, the chaotic map generates the initial population, increasing the diversity of composition plans and helping the search avoid local optima, while the nonlinear convergence factor improves optimization performance by adjusting the algorithm's search ability. Finally, OGWO/LN was parallelized with the MapReduce framework. Experimental results on real datasets show that, compared with algorithms such as IFOA4WSC (Improved Fruit Fly Optimization Algorithm for Web Service Composition), MR-IDPSO (MapReduce based on Improved Discrete Particle Swarm Optimization), and MR-GA (MapReduce based on Genetic Algorithm), the proposed algorithm improves the average fitness value by 8.69%, 7.94%, and 12.25% respectively, and shows better optimization performance and stability on the large-scale Web service composition problem.
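The two GWO modifications can be sketched as follows. The logistic map parameters (`mu`, `x0`), the cosine decay for the convergence factor, and the toy sphere fitness standing in for a composition plan's QoS score are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def logistic_init(n_wolves, dim, mu=4.0, x0=0.7):
    """Seed the population with the logistic chaotic map
    x_{k+1} = mu * x_k * (1 - x_k), giving diverse points in (0, 1)."""
    pop = np.empty((n_wolves, dim))
    x = x0
    for i in range(n_wolves):
        for j in range(dim):
            x = mu * x * (1.0 - x)
            pop[i, j] = x
    return pop

def nonlinear_a(t, t_max, a0=2.0):
    """Nonlinear convergence factor: a cosine decay from a0 to 0, one
    illustrative alternative to standard GWO's linear schedule."""
    return a0 * np.cos(np.pi * t / (2.0 * t_max))

def ogwo_ln(fitness, dim, n_wolves=20, t_max=100, seed=0):
    """Minimize `fitness` over [0, 1]^dim with a GWO variant using
    chaotic initialization and a nonlinear convergence factor."""
    rng = np.random.default_rng(seed)
    pop = logistic_init(n_wolves, dim)
    for t in range(t_max):
        scores = np.array([fitness(w) for w in pop])
        alpha, beta, delta = pop[np.argsort(scores)[:3]]
        a = nonlinear_a(t, t_max)
        new_pop = np.zeros_like(pop)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            new_pop += leader - A * np.abs(C * leader - pop)
        pop = np.clip(new_pop / 3.0, 0.0, 1.0)
    scores = np.array([fitness(w) for w in pop])
    best = pop[np.argmin(scores)]
    return best, fitness(best)
```

In the full method, each dimension of a wolf's position would be decoded to one of the Top-k concrete services for the corresponding abstract service, and the MapReduce layer would evaluate fitness in parallel.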
The classic sparse-coding-based super-resolution algorithm has a high computational cost in the reconstruction phase. To address this drawback, a predictive sparse coding-based single image super-resolution method was proposed. In the training phase, the proposed method added a code prediction error term to the traditional sparse coding objective and minimized the resulting objective function with an alternating minimization procedure. In the testing phase, the reconstruction coefficients could be estimated by simply multiplying the low-dimensional image patch by the low-dimensional dictionary, without solving any sparse regression problem. Experimental results demonstrate that, compared with the classic sparse-coding-based single image super-resolution algorithm, the proposed method significantly reduces reconstruction time while maintaining the visual quality of the super-resolution results.
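The testing-phase shortcut amounts to two matrix-vector products. In this minimal sketch the patch and dictionary sizes are assumed, and random matrices stand in for the quantities learned jointly during training:

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, lr_dim, hr_dim = 64, 25, 100   # assumed: 5x5 LR patches, 10x10 HR patches

# Random stand-ins for the trained quantities: the high-resolution
# dictionary D_h and the low-dimensional matrix W that maps an LR patch
# directly to its code (learned via the code prediction error term).
D_h = rng.standard_normal((hr_dim, n_atoms))
W = rng.standard_normal((n_atoms, lr_dim))

def reconstruct(lr_patch):
    """Testing phase: the reconstruction coefficients come from a single
    matrix-vector product, with no sparse regression solve."""
    alpha = W @ lr_patch       # predicted coefficients
    return D_h @ alpha         # estimated high-resolution patch

hr_patch = reconstruct(rng.standard_normal(lr_dim))
```

This is where the speedup comes from: an iterative solver per patch (e.g. Lasso) is replaced by O(n_atoms * lr_dim) multiply-adds.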
To solve the problems of low resource utilization in community health centers, limited contact between community health centers and community residents, and the difficulty for residents to participate in personal health management and medical care, an intelligent healthy community system was developed. Taking advantage of the increasing popularity of mobile devices, the system provided support for health record management, chronic disease management, immunization, appointment registration, medical information query, and other community health center services. It realized data sharing and interaction among smart phones, tablet PCs, and the Hospital Information System (HIS), allowing residents to actively participate in personal health management. The system has been deployed in a community health center in Chengdu, where it makes it convenient for community residents to manage their personal health and improves the work efficiency and service quality of the center.
In three-dimensional sound reproduction with two loudspeakers, optimization of Crosstalk Cancellation System (CCS) performance has usually considered factors such as inverse filter design parameters and loudspeaker configuration independently. A frequency-domain Least-Squares (LS) approximation was proposed for performance optimization, and the relationships among these factors and their joint effect on CCS performance were evaluated systematically. The method obtained optimization parameters that trade off the computational efficiency and the cancellation performance of the crosstalk cancellation algorithm. The crosstalk cancellation effect was evaluated with the Channel Separation (CS) and Performance Error (PE) indexes, and simulation results indicate that the obtained parameters achieve good crosstalk cancellation.
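The frequency-domain LS idea can be sketched per frequency bin: invert the 2x2 acoustic transfer matrix in a regularized least-squares sense and score the result with a channel-separation measure. The regularization weight `beta` and the simplified CS definition used here are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ls_crosstalk_filters(H, beta=1e-6):
    """Regularized LS inverse filters per frequency bin.

    H: (n_bins, 2, 2) complex acoustic transfer matrices (speakers -> ears).
    Returns C = (H^H H + beta I)^-1 H^H, so that H @ C ~= I: each ear
    receives its own channel while the crosstalk path is cancelled.
    """
    Hh = np.conj(np.swapaxes(H, -1, -2))
    return np.linalg.solve(Hh @ H + beta * np.eye(2), Hh)

def channel_separation_db(H, C):
    """Simplified Channel Separation: ratio of direct-path to crosstalk
    energy in the combined response H @ C, in dB (higher is better)."""
    R = H @ C
    direct = np.abs(R[:, 0, 0]) ** 2 + np.abs(R[:, 1, 1]) ** 2
    cross = np.abs(R[:, 0, 1]) ** 2 + np.abs(R[:, 1, 0]) ** 2
    return 10.0 * np.log10(direct.mean() / cross.mean())
```

Sweeping `beta` and the filter length against CS-style scores is one way to explore the efficiency/performance tradeoff the abstract describes.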
With the increasing amount of data and expanding application demands, the efficient organization, management, and rapid processing of remote sensing data have become a bottleneck in the application of remote sensing technology. Earth partition theory and high performance computing provide a possible way to solve this problem. Combined with the global partition model, a conceptual model and a data model of the partition facet template were proposed based on the partition facets of remote sensing images. A template-based computing mode for partition facets was designed, a small partition template database was established, and a specific example of applying partition image data templates was given for validation. The experimental results demonstrate the feasibility of the data model and show improved efficiency of target retrieval.
Bernstein's batch factorization algorithm can test the B-smoothness of many integers in a short time, but its memory cost is so large that it is widely used in theoretical analyses yet rarely used in practice. To solve this problem, a hierarchical batch factorization cloud framework based on splitting the product of primes into pieces was proposed. The hierarchical design keeps development clear and simple and allows the framework to be ported easily to other architectures. The cloud computing framework, modeled on MapReduce, uses services provided by cloud clients, such as distributed memory, shared memory, and messaging, to carry out the mapping of the split-primes batch factorization algorithm, which overcomes the large memory cost of Bernstein's method. Experiments show that the framework has good scalability and adapts to batch factorization tasks of different sizes, with the scale of the prime product varying from 1.5 GB to 192 GB, which significantly enhances the practicality of the algorithm.
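The core of Bernstein-style batch smoothness testing fits in a few lines: an integer n is B-smooth exactly when n divides P^e, for P the product of all primes up to B and e at least log2(n), and a product/remainder tree shares the expensive `P mod n` work across the whole batch. This is a minimal single-machine sketch of the textbook idea, not the proposed split-primes cloud framework, which exists precisely because P itself can reach hundreds of gigabytes.

```python
from math import prod

def primes_up_to(B):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (B + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(B ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(2, B + 1) if sieve[i]]

def remainder_tree(P, ns):
    """Compute P mod n for every n in the batch via a product/remainder
    tree -- the step that makes the batch algorithm fast."""
    tree = [list(ns)]
    while len(tree[-1]) > 1:        # build products pairwise, bottom up
        level = tree[-1]
        tree.append([prod(level[i:i + 2]) for i in range(0, len(level), 2)])
    rems = [P % tree[-1][0]]        # reduce P down the tree, top down
    for level in reversed(tree[:-1]):
        rems = [rems[i // 2] % m for i, m in enumerate(level)]
    return rems

def batch_is_smooth(ns, B):
    """n is B-smooth iff n divides P^e with e >= log2(n), i.e.
    (P mod n)^bit_length(n) mod n == 0."""
    P = prod(primes_up_to(B))
    return [pow(r, n.bit_length(), n) == 0
            for n, r in zip(ns, remainder_tree(P, ns))]
```

The framework's contribution is orthogonal to this sketch: it splits P into pieces so that each cloud worker only ever holds a manageable slice of the prime product.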
To improve the storage efficiency of syntax, the FLPM-B* tree, a B* tree with a full link pointer module, was designed. According to its structural characteristics, algorithms such as a module insertion algorithm, a reconstruction algorithm, and a partition algorithm were put forward. These algorithms make the FLPM-B* tree easy to manipulate and efficient.